Testing Fix/issue 400 clean #474
Open
tylerslaton wants to merge 24 commits into main from fix/issue-400-clean
Conversation
Replace fragile usage_metadata-based logic with robust streaming detection that checks multiple explicit streaming indicators.

**Problem:** The original logic relied on `not adk_event.usage_metadata` to determine whether an event should be processed as streaming. This was fragile because Claude models can include usage_metadata even in streaming chunks, causing responses to disappear.

**Solution:** Implement comprehensive streaming detection that checks:
- `partial` attribute (explicitly marked as partial)
- `turn_complete` attribute (live streaming completion status)
- `is_final_response()` method (final response indicator)
- `finish_reason` attribute (fallback for content without a finish reason)

This ensures all streaming content is captured regardless of usage_metadata presence, fixing compatibility with Claude Sonnet 4 and other models.

**Testing:**
✅ All 277 tests pass
✅ Streaming detection works across different model providers

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
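A minimal sketch of the detection order described above. The attribute and method names (`partial`, `turn_complete`, `is_final_response()`, `finish_reason`) are taken from this description; the helper itself is illustrative, not the actual ADKAgent code:

```python
def is_streaming_chunk(adk_event) -> bool:
    """Return True if this ADK event should be handled by the streaming path."""
    # Explicitly marked partial chunks are always streaming.
    if getattr(adk_event, "partial", None):
        return True
    # Live-streaming events expose turn_complete; an unfinished turn is still streaming.
    if getattr(adk_event, "turn_complete", None) is False:
        return True
    # A genuine final response is not a streaming chunk.
    if adk_event.is_final_response():
        return False
    # Fallback: content without a finish_reason is treated as streaming,
    # regardless of whether usage_metadata happens to be attached.
    return not getattr(adk_event, "finish_reason", None)
```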
…k-agent Add regression test for partial final ADK chunks
Change TextMessageContentEvent to TextMessageChunkEvent in test to match actual AG-UI protocol event types. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <[email protected]>
…ttranslator Add test for ADK streaming fallback branch
…e for streaming event
@tylerslaton I see the failures - I'll take a look this evening.
The Tool Based Generative UI haiku test was exhibiting flaky behavior where it would sometimes pass and sometimes fail with the same test conditions. The test was more reliable when run with --headed than when run headless, suggesting a timing-related issue.

Root cause: The extractMainDisplayHaikuContent() method was concatenating ALL visible haiku lines from the main display, while the chat extraction only captured the most recent haiku. When multiple haikus were displayed simultaneously (due to rendering timing), this caused mismatches.

Fix: Modified extractMainDisplayHaikuContent() to extract only the last 3 lines (the most recent haiku), matching the behavior of the chat extraction and eliminating timing-related flakiness. This affects all 10 platform integration tests that use ToolBaseGenUIPage.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
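An illustrative sketch of the "last haiku only" extraction described above. The real page object lives in the e2e test suite; the function name and signature here are assumptions:

```python
def extract_main_display_haiku(visible_lines: list[str]) -> list[str]:
    """Return only the most recent haiku: the last three non-empty lines."""
    lines = [line.strip() for line in visible_lines if line.strip()]
    # Keeping just the final three lines means an older haiku that is still
    # rendered cannot leak into the comparison with the chat extraction.
    return lines[-3:]
```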
Setup Workload Identity Federation (cherry picked from commit 979b3dc)
🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <[email protected]>
Add fallback logic to detect streaming completion using finish_reason when is_final_response returns False but finish_reason is set.

**Problem:** Gemini returns events with partial=True and is_final_response()=False even on the final chunk that contains finish_reason="STOP". This caused streaming messages to remain open and require force-closing, resulting in warnings.

**Solution:** Enhanced should_send_end logic to check for finish_reason as a fallback:
- Check if the finish_reason attribute exists and is truthy
- If streaming is active and finish_reason is present, emit TEXT_MESSAGE_END
- Formula: should_send_end = (is_final_response and not is_partial) or (has_finish_reason and self._is_streaming)

**Testing:**
✅ All 277 tests pass
✅ Added test_partial_with_finish_reason to verify the fix
✅ Eliminates "Force-closing unterminated streaming message" warnings
✅ Properly emits TEXT_MESSAGE_END for events with finish_reason

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
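A sketch of that decision as a standalone function. It mirrors the formula stated above; the function name and the `is_streaming` parameter (standing in for the translator's internal `_is_streaming` flag) are illustrative:

```python
def should_send_text_message_end(adk_event, is_streaming: bool) -> bool:
    """Decide whether to emit TEXT_MESSAGE_END for this ADK event."""
    is_final = adk_event.is_final_response()
    is_partial = bool(getattr(adk_event, "partial", False))
    has_finish_reason = bool(getattr(adk_event, "finish_reason", None))
    # A non-partial final response ends the stream; so does a finish_reason
    # arriving while a streamed message is still open (the Gemini case above).
    return (is_final and not is_partial) or (has_finish_reason and is_streaming)
```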
- Prefer LRO routing in ADKAgent when long‑running tool call IDs are present in event.content.parts (prevents misrouting into the streaming path and tool loops; preserves HITL pause)
- Force‑close any active streaming text before emitting LRO tool events (guarantees TEXT_MESSAGE_END precedes TOOL_CALL_START)
- Harden EventTranslator.translate to filter out long‑running tool calls from the general path; only emit non‑LRO calls (avoids duplicate tool events); see the sketch after this list
- Add tests:
  * test_lro_filtering.py (translator‑level filtering + LRO‑only emission)
  * test_integration_mixed_partials.py (streaming → non‑LRO → final LRO: order, no duplicates, correct IDs)
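A rough sketch of the filtering idea, assuming the ADK event exposes `long_running_tool_ids` and function-call parts with an `id`, as described above; the helper name and return shape are illustrative, not the actual EventTranslator API:

```python
def split_tool_calls(adk_event):
    """Separate long-running tool calls from regular ones on an ADK event."""
    lro_ids = set(getattr(adk_event, "long_running_tool_ids", None) or [])
    regular, long_running = [], []
    parts = adk_event.content.parts if adk_event.content else []
    for part in parts:
        call = getattr(part, "function_call", None)
        if call is None:
            continue
        (long_running if call.id in lro_ids else regular).append(call)
    # The general translate path should emit only `regular`; `long_running`
    # calls are routed separately, after any open streaming text is closed
    # so TEXT_MESSAGE_END precedes TOOL_CALL_START.
    return regular, long_running
```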
testing #471